Nodal Point of Thought 2019-02-18
Headings / a table of contents form a tree
Relational connections between fragments
Link from Scrapbox text
The KJ method focuses on "seems relevant," not "is similar"
Scrapbox is a stock of associations, not of tags.
→ Associations, not analogies
https://gyazo.com/d26b02759f38ca9c1564e13af108ed79
Consider the KJ method being done by a machine rather than a human: The machine does the KJ method.
Suppose a device R for generating associations is given
R is a device that takes one fragment as input and returns another fragment
There are already various ways to realize this: Relational connections between fragments.
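As a minimal sketch, the associative device R described above can be modeled as a function from one fragment to another. The association table here is a hypothetical toy example (not real Scrapbox data):

```python
from typing import Optional

# Hypothetical toy association table; in a real system this could be
# backed by links between fragments (e.g. Scrapbox pages).
ASSOCIATIONS = {
    "headings": "table of contents",
    "table of contents": "tree structure",
    "tags": "associations",
}

def associate(fragment: str) -> Optional[str]:
    """Take one fragment as input; return an associated fragment, or None."""
    return ASSOCIATIONS.get(fragment)
```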
However, the KJ method finds things that "might be relevant" among the 100 written-down fragments and places them nearby.
I naively thought this "seem relevant" was similarity, but that is incorrect.
In fact, when I teach the KJ method, I repeatedly say, "It's not about collecting similarities."
Example: conflict is a close relationship
What is a relationship?
Just as Scrapbox, the stock of associations, encourages new associative connections via 2-hop links, so does this search.
From the given 100 fragments, we search by association, and the first pair whose association chains merge is the "pair that seems related."
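A sketch of this search under assumed toy data: association chains grow breadth-first from each fragment in parallel, and the first two chains to reach a common node identify the "likely related pair" (a confluence of associations). The graph and fragment names are hypothetical.

```python
from collections import deque

# Hypothetical association graph among written-down fragments.
GRAPH = {
    "deadline": ["stress"],
    "exercise": ["health"],
    "stress": ["health"],  # chains from "deadline" and "exercise" merge here
}

def first_merging_pair(fragments, graph):
    """Breadth-first association search from each fragment in parallel;
    return the first pair of starting fragments whose association
    chains reach a common node, or None if no chains merge."""
    owner = {f: f for f in fragments}             # node -> start that reached it
    frontier = deque((f, f) for f in fragments)   # (node, start)
    while frontier:
        node, start = frontier.popleft()
        for nxt in graph.get(node, ()):
            if nxt in owner:
                if owner[nxt] != start:
                    return tuple(sorted((owner[nxt], start)))
            else:
                owner[nxt] = start
                frontier.append((nxt, start))
    return None
```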
Confluence of associations
I thought of the associative device as taking fragments as input and producing fragments as output
But in the KJ method, "something that might be related" can be laid out nearby without verbalizing "why it is related" at that moment.
Connected a posteriori: relationships where the topics turn out to be connected later.
Isn't it too restrictive for an associative device to output fragments?
Couldn't the output be a vector instead of language?
Looking at it as vectors: if you collapse the flow "language → vector → vector → language," it is "language → language" after all. So the real implementation pattern is an associative device that involves vectors in the associative process.
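A sketch of that "language → vector → vector → language" pattern, with hand-assigned 2-d toy vectors standing in for a real sentence-embedding model (all data here is hypothetical):

```python
import math

# Hypothetical toy embeddings; in practice these would come from a
# sentence-embedding model.
EMBEDDING = {
    "cats purr when happy": (0.9, 0.1),
    "dogs wag their tails": (0.8, 0.2),
    "interest rates rose":  (0.1, 0.9),
}

def associate_via_vectors(fragment: str) -> str:
    """language -> vector -> vector -> language: embed the input,
    then decode the nearest *other* vector back into a fragment."""
    v = EMBEDDING[fragment]
    return min(
        (w for w in EMBEDDING if w != fragment),
        key=lambda w: math.dist(v, EMBEDDING[w]),
    )
```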
Annotation that assigns a 2D vector to each of a set of short sentences = the KJ method.
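One way to read this equivalence as code, under assumed data: embed each short sentence (toy 4-d vectors here; a real model would supply them) and project to 2D with PCA, so each sentence gets a position on the sheet, with related sentences landing near each other.

```python
import numpy as np

# Hypothetical 4-d sentence embeddings for three fragments.
sentences = ["fragment A", "fragment B", "fragment C"]
X = np.array([
    [0.9, 0.1, 0.0, 0.2],   # A and B are close in embedding space
    [0.8, 0.2, 0.1, 0.3],
    [0.0, 0.9, 0.8, 0.1],   # C is far from both
])

# PCA via SVD: reduce each sentence to a 2D vector,
# i.e. a position on the KJ-method sheet.
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
coords = Xc @ Vt[:2].T            # shape (3, 2)
layout = dict(zip(sentences, coords))
```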
---
This page is auto-translated from /nishio/思考の結節点2019-02-18 using DeepL. If you see something interesting but the auto-translated English is not good enough to understand it, feel free to let me know at @nishio_en. I'm very happy to spread my thought to non-Japanese readers.